Image token removal is an efficient augmentation strategy for reducing the cost of computing image features. However, this strategy has been found to adversely affect the accuracy of CLIP-based training. We hypothesize that removing a large portion of image tokens may improperly discard the semantic content associated with a given text description, thus constituting an incorrect pairing target in CLIP training. To address this issue, we propose an attentive token removal approach for CLIP training, which retains tokens with a high semantic correlation to the text description. The correlation scores are computed in an online fashion using the EMA version of the visual encoder. Our experiments show that the proposed attentive masking approach performs better than the previous method of random token removal for CLIP training. The approach also makes it efficient to apply multiple augmentation views to the image and to introduce instance contrastive learning tasks between these views into the CLIP framework. Compared to other CLIP improvements that combine different pre-training targets, such as SLIP and MaskCLIP, our method is not only more effective but also much more efficient. Specifically, using ViT-B and the YFCC-15M dataset, our approach achieves $43.9\%$ top-1 accuracy on ImageNet-1K zero-shot classification, as well as $62.7/42.1$ and $38.0/23.2$ I2T/T2I retrieval accuracy on Flickr30K and MS COCO, which are $+1.1\%$, $+5.5/+0.9$, and $+4.4/+1.3$ higher than the SLIP method, while being $2.30\times$ faster. An efficient variant of our approach, running $1.16\times$ faster than the plain CLIP model, achieves significant gains of $+5.3\%$, $+11.3/+8.0$, and $+9.5/+4.9$ on these benchmarks.
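As a rough illustration of the masking step, the sketch below keeps only the patch tokens that score highest under a relevance signal from the EMA (teacher) encoder. Using the EMA encoder's [CLS]-to-patch attention as that signal is one plausible proxy for the text-correlation scores described above; all names here are illustrative, not the paper's actual implementation.

```python
import torch

def attentive_token_removal(patch_tokens, cls_attention, keep_ratio=0.5):
    """Keep the patch tokens most attended by the EMA encoder's [CLS] token.

    patch_tokens:  (B, N, D) patch embeddings of the student view
    cls_attention: (B, N) [CLS]-to-patch attention from the EMA (teacher)
                   encoder, used as a proxy for semantic relevance to the text
    """
    B, N, D = patch_tokens.shape
    k = max(1, int(N * keep_ratio))
    # indices of the top-k most relevant tokens per image
    topk = cls_attention.topk(k, dim=1).indices            # (B, k)
    idx = topk.unsqueeze(-1).expand(-1, -1, D)             # (B, k, D)
    return patch_tokens.gather(1, idx)                     # (B, k, D)

# toy usage: keep 25% of 196 patch tokens
tokens = torch.randn(2, 196, 768)
attn = torch.rand(2, 196)
kept = attentive_token_removal(tokens, attn, keep_ratio=0.25)
print(kept.shape)  # torch.Size([2, 49, 768])
```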
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The pre-training models proposed for different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. This modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
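To make the modular idea concrete, here is a minimal sketch (not TencentPretrain's actual API) in which a model is assembled by choosing one module per component slot; only three of the five slots appear, since an encoder-only masked-LM model needs no target embedding or decoder.

```python
import torch
import torch.nn as nn

class ModularPretrainModel(nn.Module):
    """Illustrative only: one module chosen per component slot, mirroring the
    embedding / encoder / target-embedding / decoder / target decomposition."""
    def __init__(self, embedding, encoder, target):
        super().__init__()
        self.embedding = embedding   # maps token ids to vectors
        self.encoder = encoder       # contextualizes the embeddings
        self.target = target         # pre-training head, e.g. MLM logits

    def forward(self, ids):
        return self.target(self.encoder(self.embedding(ids)))

vocab, dim = 30000, 256
model = ModularPretrainModel(
    embedding=nn.Embedding(vocab, dim),
    encoder=nn.TransformerEncoder(
        nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
        num_layers=2),
    target=nn.Linear(dim, vocab),    # masked-LM style target head
)
logits = model(torch.randint(0, vocab, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 30000])
```

Swapping the encoder for a vision or audio module while keeping the other slots fixed is exactly the kind of recombination the component registry is meant to enable.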
Despite tremendous efforts, the classification of gigapixel whole-slide images (WSIs) is severely restricted either by the constrained computing resources available for processing an entire slide or by limited exploitation of knowledge across different scales. Moreover, most previous attempts lack the ability to estimate uncertainty. In practice, pathologists often analyze a WSI jointly at different magnifications: if a pathologist is uncertain at a single magnification, they repeatedly switch magnifications to discover various features of the tissue. Motivated by this diagnostic process, in this paper we propose a trustworthy multi-scale classification framework for WSIs. Our framework uses a Vision Transformer as the backbone of multiple branches, which can jointly perform classification, estimate the uncertainty of each magnification, and integrate evidence from different magnifications. Furthermore, to exploit the discriminative patches of WSIs and reduce the demand for computing resources, we propose a novel patch selection scheme using attention rollout and non-maximum suppression. To empirically study the effectiveness of our method, experiments on the WSI classification task are conducted on two benchmark databases. The results show that the trustworthy framework significantly improves WSI classification performance compared with state-of-the-art methods.
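The scoring half of the patch selection step can be illustrated with the standard attention-rollout computation (Abnar and Zuidema), which propagates attention through the Transformer layers to measure how strongly the [CLS] token attends to each patch; the framework additionally applies non-maximum suppression over the scored patches, which this sketch omits.

```python
import torch

def attention_rollout(attn_layers):
    """Attention rollout: propagate attention through layers while accounting
    for residual connections. attn_layers: list of (B, H, N, N) tensors."""
    B, _, N, _ = attn_layers[0].shape
    rollout = torch.eye(N).expand(B, N, N).clone()
    for attn in attn_layers:
        a = attn.mean(dim=1)                    # average heads: (B, N, N)
        a = 0.5 * a + 0.5 * torch.eye(N)        # add the residual connection
        a = a / a.sum(dim=-1, keepdim=True)     # re-normalize rows
        rollout = a @ rollout
    return rollout                              # (B, N, N)

# [CLS]-to-patch relevance used to rank candidate patches before NMS
layers = [torch.softmax(torch.randn(1, 8, 197, 197), dim=-1) for _ in range(12)]
cls_to_patch = attention_rollout(layers)[:, 0, 1:]   # (1, 196)
print(cls_to_patch.topk(5).indices)                  # most discriminative patches
```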
Recent industrial inference engines (e.g., FasterTransformer and TurboTransformers) have verified that half-precision floating point (FP16) and 8-bit integer (INT8) quantization can greatly improve model inference speed. However, existing FP16 or INT8 quantization methods are too complex, and improper use can severely harm performance. In this paper, we develop a toolkit that lets users easily quantize their models for inference, in which Self-Adaptive Mixed-Precision (SAMP) is proposed to automatically control the quantization ratio through a mixed-precision architecture, balancing efficiency and accuracy. Experimental results show that our SAMP toolkit achieves higher speedup than PyTorch and FasterTransformer while ensuring the required accuracy. Moreover, SAMP is based on a modular design that decouples the tokenizer, embedding, encoder, and target layers, which allows users to handle various downstream tasks, and it can be seamlessly integrated into PyTorch.
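The following sketch illustrates one way an adaptive mixed-precision assignment could work; this is an illustrative heuristic, not SAMP's actual algorithm. Layers are ranked by their INT8 quantization error, and only the least-sensitive fraction is quantized while the rest stay in FP16.

```python
import torch

def int8_error(w):
    """Mean-squared error of symmetric per-tensor INT8 quantization."""
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127) * scale
    return torch.mean((w - q) ** 2).item()

def assign_precisions(layers, int8_ratio=0.5):
    """Illustrative mixed-precision assignment: quantize the layers least
    sensitive to INT8 error, keep the remainder in FP16. int8_ratio plays
    the role of the quantization ratio being controlled."""
    errs = sorted((int8_error(w), name) for name, w in layers.items())
    n_int8 = int(len(errs) * int8_ratio)
    return {name: ("int8" if i < n_int8 else "fp16")
            for i, (_, name) in enumerate(errs)}

layers = {f"encoder.{i}.ffn": torch.randn(64, 64) for i in range(4)}
print(assign_precisions(layers, int8_ratio=0.5))
```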
Scientific literature is a high-quality corpus, supporting a great deal of natural language processing (NLP) research. However, existing datasets center on English, which restricts the development of Chinese scientific NLP. In this work, we present CSL, a large-scale Chinese Scientific Literature dataset containing the titles, abstracts, keywords, and academic fields of 396k papers. To our knowledge, CSL is the first scientific document dataset in Chinese. CSL can serve as a Chinese corpus. Moreover, this semi-structured data is a form of natural annotation that can constitute many supervised NLP tasks. Based on CSL, we present a benchmark to evaluate model performance across scientific-domain tasks, i.e., summarization, keyword generation, and text classification. We analyze the behavior of existing text-to-text models on the evaluation tasks and reveal the challenges of Chinese scientific NLP, providing a valuable reference for future research. Data and code are available at https://github.com/ydli-ai/csl
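To show how the semi-structured records naturally annotate the three benchmark tasks, the sketch below casts a hypothetical CSL record into text-to-text pairs. The field names follow the description above; the exact schema in the repository may differ.

```python
# A hypothetical record with the four fields each CSL paper provides.
paper = {
    "title": "A Study of Graph Neural Networks",
    "abstract": "This paper surveys graph neural networks and their uses.",
    "keywords": ["graph neural network", "survey"],
    "discipline": "Computer Science",
}

def to_text2text(paper, task):
    """Cast one CSL record into an input/target pair for a text-to-text model."""
    if task == "summarization":          # abstract -> title
        return paper["abstract"], paper["title"]
    if task == "keyword_generation":     # abstract -> keyword list
        return paper["abstract"], ", ".join(paper["keywords"])
    if task == "classification":         # abstract -> academic field
        return paper["abstract"], paper["discipline"]
    raise ValueError(task)

print(to_text2text(paper, "classification"))
```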
The time at which a message is communicated is a vital piece of metadata in many real-world natural language processing tasks such as Topic Detection and Tracking (TDT). TDT systems aim to cluster a corpus of news articles by event, and in that context, stories describing the same event are likely to have been written at around the same time. Prior work on time modeling for TDT takes this into account, but does not capture well how time interacts with the semantic nature of the event. For example, stories about a tropical storm are likely to be written within a short time interval, while stories about a movie release may appear over weeks or months. In our work, we design a neural method that fuses temporal and textual information into a single representation of news documents for event detection. We fine-tune these time-aware document embeddings with a triplet loss architecture, integrate the model into downstream TDT systems, and evaluate the systems on two benchmark TDT datasets in English. In the retrospective setting, we apply clustering algorithms to the time-aware embeddings and show substantial improvements over baselines on the News2013 dataset. In the online streaming setting, we add our document encoder to an existing state-of-the-art TDT pipeline and demonstrate that it benefits overall performance. We conduct ablation studies on the time representation and fusion algorithm strategies, showing that our proposed model outperforms alternative strategies. Finally, we probe the model to examine how it handles recurring events more effectively than previous TDT systems.
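A minimal sketch of the core idea, assuming a simple concatenation-based fusion (the paper compares several fusion strategies): a small module fuses a document's text embedding with an encoding of its timestamp, and the result is trained with a standard triplet loss so that same-event stories embed closer together than different-event ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareEncoder(nn.Module):
    """Illustrative fusion of text and publication-time features."""
    def __init__(self, text_dim=768, time_dim=32, out_dim=256):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, time_dim), nn.ReLU())
        self.proj = nn.Linear(text_dim + time_dim, out_dim)

    def forward(self, text_emb, timestamp):
        t = self.time_mlp(timestamp.unsqueeze(-1))   # (B, time_dim)
        return self.proj(torch.cat([text_emb, t], dim=-1))

enc = TimeAwareEncoder()
anchor   = enc(torch.randn(4, 768), torch.rand(4))   # story from one event
positive = enc(torch.randn(4, 768), torch.rand(4))   # same-event story
negative = enc(torch.randn(4, 768), torch.rand(4))   # different-event story
loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
loss.backward()
```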
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when it shares the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
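As a rough sketch of the iRMB idea (details such as the exact attention variant differ in the actual EMO model), the block below wraps self-attention for long-distance interactions and a depthwise convolution for short-distance dependency inside one inverted residual:

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Hedged sketch of an inverted residual mobile block: attention for
    global mixing, an expanded depthwise conv for local mixing."""
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.expand = nn.Conv2d(dim, hidden, 1)          # pointwise expansion
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, 1)         # pointwise projection
        self.act = nn.SiLU()

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        seq = x.flatten(2).transpose(1, 2)     # (B, HW, C)
        seq = self.attn(seq, seq, seq)[0]      # long-distance interactions
        y = seq.transpose(1, 2).reshape(B, C, H, W)
        y = self.act(self.expand(y))
        y = self.act(self.dw(y))               # short-distance dependency
        return x + self.project(y)             # inverted residual connection

block = iRMBSketch(dim=64)
print(block(torch.randn(1, 64, 14, 14)).shape)  # torch.Size([1, 64, 14, 14])
```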
Supervised Question Answering (QA) systems rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs to train a BERT-based language model for a state-of-the-art QA system. Triples of the form <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates while the objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves performance on par with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
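A minimal sketch of the triple-to-question scheme described above; the wh-word templates and tense handling are illustrative placeholders, not PIE-QG's actual generation rules:

```python
def triple_to_qa(subject, predicate, obj, answer_slot="object"):
    """Form a question from two elements of an OpenIE triple; the third
    element becomes the answer."""
    if answer_slot == "object":      # question from subject + predicate
        return f"What did {subject} {predicate}?", obj
    else:                            # question from predicate + object
        return f"What {predicate} {obj}?", subject

print(triple_to_qa("Marie Curie", "discover", "polonium"))
# ('What did Marie Curie discover?', 'polonium')
print(triple_to_qa("Marie Curie", "discovered", "polonium",
                   answer_slot="subject"))
# ('What discovered polonium?', 'Marie Curie')
```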
Transformers have achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such a dataset is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement from ImageNet-pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty ranking task: the Transformer must identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer is encouraged to distill transformation-invariant features from the perturbed tokens, simultaneously performing difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification compared to ImageNet-pretrained weights and state-of-the-art self-supervised learning approaches.
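The two training signals can be sketched as follows, assuming a BYOL-style cosine regression for the prediction objective and a binary classification head for the difficulty ranking task; the exact losses and heads in BOLT may differ.

```python
import torch
import torch.nn.functional as F

def bolt_style_losses(online_out, target_out, difficulty_logits,
                      harder_is_online):
    """Hedged sketch of BOLT's two objectives:
    1) the online branch predicts the target branch's representation;
    2) an auxiliary head classifies which branch saw the harder perturbation.
    """
    # 1) BYOL-style regression: cosine loss against the stop-gradient target
    pred_loss = 2 - 2 * F.cosine_similarity(online_out,
                                            target_out.detach()).mean()
    # 2) difficulty ranking posed as binary classification
    rank_loss = F.binary_cross_entropy_with_logits(
        difficulty_logits, harder_is_online.float())
    return pred_loss + rank_loss

online = torch.randn(8, 256, requires_grad=True)
target = torch.randn(8, 256)                   # EMA branch, no gradient
logits = torch.randn(8, requires_grad=True)    # from the ranking head
labels = torch.randint(0, 2, (8,))             # 1 = online saw the harder view
loss = bolt_style_losses(online, target, logits, labels)
loss.backward()
```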
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes as input the original element embedding from a well-trained KGE model and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability added by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
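A minimal sketch of the score interpolation, assuming a sigmoid-parameterized adaptive weight (the paper's exact parameterization may differ):

```python
import torch

def interpolated_score(base_score, analogy_score, weight_logit):
    """Interpolate the base KGE score with the analogy score using a
    learnable adaptive weight; the weight is kept in (0, 1) via a sigmoid."""
    lam = torch.sigmoid(weight_logit)
    return lam * base_score + (1 - lam) * analogy_score

base = torch.randn(5)                   # scores from the well-trained KGE model
ana = torch.randn(5)                    # scores from the analogy functions
w = torch.nn.Parameter(torch.zeros(5))  # adaptive weights, learned end to end
print(interpolated_score(base, ana, w))
```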